Privacy, security, and bandwidth constraints have led to federated learning (FL) in wireless systems, where training a machine learning (ML) model is accomplished collaboratively without sharing raw data. Often, such collaborative FL strategies necessitate model aggregation at a server. On the other hand, decentralized FL necessitates that participating clients reach a consensus ML model by exchanging parameter updates. In this work, we propose the over-the-air clustered wireless FL (CWFL) strategy, which eliminates the need for a strong central server yet achieves accuracy similar to the server-based strategy while using fewer channel uses compared with decentralized FL. We theoretically show that the convergence rate of CWFL per cluster is O(1/T) while mitigating the impact of noise. Using the MNIST and CIFAR datasets, we demonstrate the accuracy of CWFL for different numbers of clusters across communication rounds.
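The over-the-air aggregation idea underlying CWFL can be illustrated with a minimal numpy sketch: clients in a cluster transmit their parameter updates simultaneously, so the receiver observes their analog superposition plus channel noise, from which the average model is recovered. This is an assumed toy illustration of the general principle, not the paper's implementation; all names and the noise model are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def over_the_air_aggregate(client_params, noise_std=0.01):
    """Simulate analog over-the-air aggregation within one cluster:
    simultaneous transmissions superpose on the channel, so the receiver
    observes the sum of all client parameter vectors plus additive noise,
    then scales by the number of clients to obtain the averaged model."""
    superposed = np.sum(client_params, axis=0)               # waveform superposition
    noisy = superposed + rng.normal(0.0, noise_std, superposed.shape)
    return noisy / len(client_params)                        # averaged update

# Three clients in one cluster, each holding a local parameter vector.
clients = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.8, 2.2])]
avg = over_the_air_aggregate(clients)
```

With low noise, `avg` stays close to the clean average `[1.0, 2.0]`; the O(1/T) analysis in the paper quantifies how this residual channel noise affects convergence.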
We introduce camouflaged data poisoning attacks, a new attack vector that arises in the context of machine unlearning and other settings where model retraining may be induced. An adversary first adds a few carefully crafted points to the training dataset such that the impact on the model's predictions is minimal. The adversary subsequently triggers a request to remove a subset of the introduced points, at which point the attack is unleashed and the model's predictions are negatively affected. In particular, we consider clean-label targeted attacks (in which the goal is to cause the model to misclassify a specific test point) on datasets including CIFAR-10, Imagenette, and Imagewoof. The attack is realized by constructing camouflage datapoints that mask the effect of a poisoned dataset.
State-of-the-art pre-trained language models (PLMs) outperform other models when applied to the majority of language processing tasks. However, PLMs have been found to degrade in performance under distribution shift, a phenomenon that occurs when data at test time does not come from the same distribution as the source training set. Equally challenging is the task of obtaining labels in real time, due to issues such as long labeling-feedback loops. The lack of adequate methods that address these challenges underscores the need for approaches that continuously adapt the PLM to a distinct distribution. Unsupervised domain adaptation adapts a source model to an unseen and unlabeled target domain. While techniques such as data augmentation can adapt models in several scenarios, they have only been sparsely studied for addressing the distribution-shift problem. In this work, we present an approach (MEMO-CL) that improves the performance of PLMs at test time under distribution shift. Our approach takes advantage of the latest unsupervised techniques in data augmentation and adaptation to minimize the entropy of the PLM's output distribution. MEMO-CL operates on a batch of augmented samples from a single observation in the test set. The technique is unsupervised, domain-agnostic, easy to implement, and requires no additional data. Our experiments yield a 3% improvement over current test-time adaptation baselines.
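The entropy-minimization objective described above can be sketched numerically: average the model's predictive distributions over several augmentations of one test input, then compute the entropy of that marginal distribution, which adaptation would minimize by gradient descent. This is a hedged sketch of the MEMO-style objective in plain numpy, not the MEMO-CL implementation; the logits below are fabricated for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def marginal_entropy(logits_per_aug):
    """Test-time objective: average the predictive distributions over
    augmentations of a single test input, then return the entropy of
    that marginal distribution (lower = more confident and consistent)."""
    probs = softmax(logits_per_aug)      # shape (n_augs, n_classes)
    marginal = probs.mean(axis=0)        # average over augmentations
    return float(-(marginal * np.log(marginal + 1e-12)).sum())

# Confident, consistent predictions across two augmentations -> low entropy.
confident = np.array([[5.0, 0.0, 0.0], [4.5, 0.2, 0.1]])
# Conflicting predictions across augmentations -> high entropy,
# which a test-time adaptation step would push down.
uncertain = np.array([[5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
assert marginal_entropy(confident) < marginal_entropy(uncertain)
```

In the actual method, this scalar would be backpropagated through the PLM to update its parameters on each test batch; here it only shows why entropy serves as an unsupervised confidence signal.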
Building an AI agent that can design on its own has been a goal since the 1980s. Recently, deep learning has shown the ability to learn from large-scale data, enabling significant advances in data-driven design. However, learning over prior data limits us to solving only problems that have been solved before and biases data-driven learning towards existing solutions. The ultimate goal for a design agent is the ability to learn generalizable design behavior in a problem space without having seen it before. We introduce a self-learning agent framework in this work that achieves this goal. This framework integrates a deep policy network with a novel tree search algorithm, where the tree search explores the problem space and the deep policy network leverages self-generated experience to guide the search further. The framework first demonstrates an ability to discover high-performing generative strategies without any prior data, and second, it illustrates zero-shot generalization of generative strategies across various unseen boundary conditions. This work evaluates the effectiveness and versatility of the framework by solving multiple versions of two engineering design problems without retraining. Overall, this paper presents a methodology to self-learn high-performing and generalizable problem-solving behavior in an arbitrary problem space, circumventing the need for expert data, existing solutions, and problem-specific learning.
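The abstract does not specify the novel tree search, but the general pattern of a policy network guiding exploration can be sketched with the standard PUCT selection rule used in policy-guided search (assumed here as a stand-in, not the paper's algorithm): each child node is scored by its mean value plus a policy-prior-weighted exploration bonus.

```python
import math

def puct_select(children, c_puct=1.0):
    """One selection step of policy-guided tree search: choose the child
    maximizing mean value (exploitation) plus a prior-weighted bonus that
    shrinks as the child accumulates visits (exploration). All field
    names are illustrative."""
    total_visits = sum(ch["visits"] for ch in children)

    def score(ch):
        q = ch["value_sum"] / ch["visits"] if ch["visits"] else 0.0
        u = c_puct * ch["prior"] * math.sqrt(total_visits + 1) / (1 + ch["visits"])
        return q + u

    return max(range(len(children)), key=lambda i: score(children[i]))

children = [
    {"prior": 0.6, "visits": 10, "value_sum": 4.0},  # heavily explored, mediocre
    {"prior": 0.3, "visits": 1,  "value_sum": 0.9},  # promising, underexplored
    {"prior": 0.1, "visits": 0,  "value_sum": 0.0},  # unvisited, low prior
]
best = puct_select(children)  # -> 1: high value plus exploration bonus wins
```

Self-generated experience from completed searches would then retrain the policy priors, closing the loop the abstract describes between search and policy network.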
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Recent developments in explainable AI (XAI) methods allow researchers to explore the inner workings of deep neural networks (DNNs), revealing crucial information about input-output relationships and how data connects with machine learning models. In this paper we explore the interpretability of DNN models designed to identify jets coming from top quark decay in high-energy proton-proton collisions at the Large Hadron Collider (LHC). We review a subset of existing top tagger models and explore different quantitative methods to identify which features play the most important roles in identifying top jets. We also investigate how and why feature importance varies across different XAI metrics, how feature correlations impact their explainability, and how latent space representations encode information and correlate with physically meaningful quantities. Our studies uncover some major pitfalls of existing XAI methods and illustrate how they can be overcome to obtain consistent and meaningful interpretations of these models. We additionally illustrate the activity of hidden layers as Neural Activation Pattern (NAP) diagrams and demonstrate how they can be used to understand how DNNs relay information across the layers, and how this understanding can help make such models significantly simpler by allowing effective model reoptimization and hyperparameter tuning. By incorporating observations from these interpretability studies, we obtain state-of-the-art top tagging performance from augmented implementations of existing networks.
To achieve high-fidelity haptic rendering of soft objects in high-mobility virtual environments, we propose DandelionTouch, a novel haptic display in which a swarm of drones delivers haptic actuators to the user's fingertips. Users of DandelionTouch can experience tactile feedback in a large space that is not limited by a device workspace. Importantly, they do not experience muscle fatigue during prolonged interaction with virtual objects. Hand-tracking and swarm-control algorithms allow the swarm to be guided by hand motion while avoiding collisions inside the formation. In this study, several topologies of impedance connections between swarm agents are investigated. A real-time trajectory-following experiment on a square trajectory showed that drones connected in a star topology performed the trajectory with lower average position error (RMSE reduced by 20.6% compared with the other impedance topologies and by 40.9% compared with potential-field-based swarm control). In all formations with impedance behavior, the drones achieved 28% higher speed than a swarm controlled by the potential-field algorithm. In addition, the perception of several vibrotactile patterns was evaluated in a user study with 7 participants. The study revealed that the proposed combination of time delay and frequency modulation allows users to simultaneously recognize surface properties and motion direction in VR (average recognition rate of 70%, maximum of 93%). DandelionTouch suggests a new type of haptic feedback for VR systems in which no hand-held or wearable interface is required.
A fuzzy rule-based system (FRBS) is a rule-based system that uses linguistic fuzzy variables as antecedents and consequents, and therefore represents human-comprehensible knowledge. FRBSs have been applied to a variety of applications and domains throughout the literature. However, FRBSs suffer from many drawbacks, such as limited uncertainty representation, large numbers of rules, loss of interpretability, and high computational and learning time. To overcome these problems, many extensions of FRBSs exist. In this paper, we present an overview and literature review of the various types and prominent areas of fuzzy rule-based systems, namely genetic fuzzy systems (GFS), hierarchical fuzzy systems (HFS), neuro-fuzzy systems (NFS), evolving fuzzy systems (EFS), FRBSs for big data, FRBSs for imbalanced data, FRBSs to deal with high dimensionality, and FRBSs that use cluster centroids as fuzzy rules, covering the period 2010-2021. GFSs use genetic/evolutionary approaches to improve the learning ability of FRBSs, HFSs address the curse of dimensionality of FRBSs, NFSs use neural networks to improve the approximation ability of FRBSs, and EFSs consider dynamic systems. FRBSs are regarded as good solutions for big data and imbalanced data; in recent years, the interpretability of FRBSs has become popular due to high dimensionality and large numbers of rules, and using cluster centroids to limit the number of rules in an FRBS has gained attention. The paper also highlights important contributions to the field, publication statistics, and current trends, and discusses several open research areas that need further attention from the FRBS research community.
Graph-centric artificial intelligence (graph AI) has achieved remarkable success in modeling the interacting systems that are ubiquitous in nature, from dynamic systems in biology to particle physics. The increasing heterogeneity of data calls for graph neural architectures that can combine multiple inductive biases. However, combining data from various sources is challenging because the appropriate inductive bias may vary by data modality. Multimodal learning methods address this challenge by fusing multiple data modalities while leveraging cross-modal dependencies. Here, we survey 140 studies in graph-centric AI and find that diverse data types are increasingly being brought together using graphs and fed into sophisticated multimodal models. These models fall into image-grounded, language-grounded, and knowledge-grounded multimodal learning. Based on this categorization, we propose an algorithmic blueprint for multimodal graph learning. The blueprint serves as a way to study state-of-the-art architectures that process multimodal data by choosing appropriately among four different components. This effort can pave the way toward standardizing the design of sophisticated multimodal architectures for highly complex real-world problems.
Modeling what makes an advertisement persuasive, i.e., capable of eliciting the desired response from a consumer, is critical to the study of propaganda, social psychology, and marketing. Despite its importance, computational modeling of persuasion in computer vision is still in its infancy, primarily due to the lack of benchmark datasets that provide persuasion-strategy labels associated with ads. Motivated by the persuasion literature in social psychology and marketing, we introduce an extensive vocabulary of persuasion strategies and build the first ad image corpus annotated with persuasion strategies. We then formulate the task of persuasion-strategy prediction with multimodal learning, for which we design a multi-task attention fusion model that can leverage other ad-understanding tasks to predict persuasion strategies. Additionally, we conduct a real-world case study on 1600 advertising campaigns of 30 Fortune 500 companies, where we use our model's predictions to analyze which strategies work with different demographics (age and gender). The dataset also provides image segmentation masks that label the persuasion strategies in the corresponding ad images on the test split. We publicly release our code and dataset at https://midas-research.github.io/persuasion-avertisements/.